Add conditional coverage flags for accurate platform coverage #1260
Conversation
force-pushed from fb879f8 to bd2e2d0
force-pushed from ad60123 to 80a1cda
100% coverage on all platforms/versions. I still need to ensure everything is coherent, and there are probably some docs updates to make. @freakboy3742, if you have a second, I'm curious whether you have thoughts on this failure? Maybe it's just the fact that this is a 3.12 alpha? https://github.com/beeware/briefcase/actions/runs/4889292359/jobs/8727757471
My guess would be that this is a variation of the nedbat/coveragepy#1572 problem, but I'm not clear why that's a problem here.
uses: actions/[email protected]
with:
  name: coverage-data
  path: ".coverage.*"
  if-no-files-found: ignore

- name: Report platform coverage
  run: tox -e coverage

coverage-platform:
If what we're generating is a "platform and Python version" specific coverage report, and it will pass with 100% on every platform/Python combination... is there any reason not to tag this onto the end of the testing pass for a platform and Python, rather than doing it as a separate step with its own matrix fanout?
Fair point. The organic evolution of this PR has definitely influenced this... however, I did want to ensure that incomplete coverage for a single platform/version doesn't abort all of CI. I think the better experience for contributors is the complete picture of coverage when tests are passing.
Thinking about it briefly, failing coverage in the same job as the tests will probably require allowing all tests to complete, even if tests on one platform fail, so that coverage for all platforms is available even when only one platform/version fails.
So, some trade-offs involved, I suppose. I will say that checking coverage after the fact definitely increases how long CI runs (at least until we can avoid installing all the dev dependencies just to run coverage).
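As a rough sketch of what that after-the-fact step can look like (using coverage.py's Python API; this is an illustration, not necessarily what the tox coverage environment actually runs), the per-job ".coverage.*" data files are combined and reported in a single pass:

import coverage

# Sketch: merge the ".coverage.*" data files produced by each test job and
# generate one combined report, failing if the total drops below 100%.
cov = coverage.Coverage()
cov.combine()   # merge all ".coverage.*" files found in the working directory
cov.save()      # persist the combined data file
total = cov.report(show_missing=True)
if total < 100.0:
    raise SystemExit(f"Combined coverage is {total:.2f}%, expected 100%")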
In addition to wanting to provide a full analysis of coverage before failing, the only way to assess coverage across Python versions for a particular platform is to coalesce the coverage files at the end.
But this must be done using the OS for that platform. So, we end up with at least the 3 "Platform coverage" jobs alongside "Project coverage" after the tests.
At that point, if those 3 additional coverage jobs are necessary anyway, performing Python version-specific coverage there becomes more palatable, since it 1) allows a coverage failure to fail all of CI without failing the testing jobs, and 2) provides better consolidation of the coverage analysis in CI.
P.S.
> But [platform-specific coverage] must be done using the OS for that platform

This is necessary for the new conditional coverage rules to work: they operate on sys.platform to know when to require/ignore certain code blocks. We could consider spoofing this, though, to potentially allow coverage analysis for a platform while not running on that platform. I can at least experiment with that.
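A minimal sketch of how such platform-conditional pragmas could be driven by sys.platform; the helper name, the pragma tags, and the COVERAGE_PLATFORM override are illustrative assumptions rather than the exact mechanism in this PR:

import os
import sys


def conditional_exclusions():
    """Return coverage exclusion patterns for the platform being analysed.

    A line tagged "# pragma: no-cover-if-is-macos" would then only be excluded
    from the coverage requirement when the report is generated for macOS; on
    other platforms the pragma is ignored and the line must be covered.
    The resulting patterns would feed coverage.py's exclude_lines (or
    partial_branches, for branch-only exclusions).
    """
    # Hypothetical env var to "spoof" another platform, as discussed above.
    platform = os.environ.get("COVERAGE_PLATFORM", sys.platform)
    tags = {"win32": "windows", "darwin": "macos", "linux": "linux"}
    excludes = []
    for prefix, tag in tags.items():
        if platform.startswith(prefix):
            excludes.append(f"pragma: no-cover-if-is-{tag}")
    return excludes


if __name__ == "__main__":
    print(conditional_exclusions())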
This turned out to be a fruitful experiment; I was able to incorporate Python version-specific and platform-specific coverage reporting in-line with the existing job matrices.
However, this has two larger changes to the status quo:
- Unit tests no longer fail fast.
  - This is to prevent coverage reporting from failing a job and thus the whole matrix.
  - I think, ideally, developers will have access to the full coverage information.
- The Verify App Build tests will not run until the unit tests and Python version-specific coverage are passing.
  - Since failures for unit tests and Python-specific coverage are represented by a single flag, either failure prevents app builds.
  - I haven't found an existing flag to avoid this... of course, we could always create one, I guess.
 finally:
     # Ensure the App also terminates when exiting
-    if app_pid:
+    if app_pid:  # pragma: no-cover-if-is-py310
         with suppress(ProcessLookupError):
             self.tools.os.kill(app_pid, SIGTERM)
I...don't know what's happening on 3.10 here...
import pytest

PYTHONPATH = "PYTHONPATH"
PYTHONMALLOC = "PYTHONMALLOC"


@pytest.mark.parametrize("platform", ["windows", "darwin", "linux", "wonky"])
def test_pythonmalloc_only_set_for_windows(
This is to get branch coverage on any individual platform.
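A hypothetical illustration of that pattern (not Briefcase's actual test; env_for_platform is invented for the example): parametrising the platform means both sides of the platform check execute on every host, so the branch is fully covered regardless of where CI runs.

import pytest


def env_for_platform(platform):
    """Illustrative helper: subprocess env additions for a given host platform."""
    env = {"PYTHONPATH": "src"}
    if platform == "windows":
        # Only Windows gets PYTHONMALLOC adjusted in this illustration.
        env["PYTHONMALLOC"] = "default"
    return env


@pytest.mark.parametrize("platform", ["windows", "darwin", "linux", "wonky"])
def test_pythonmalloc_only_set_for_windows(platform):
    # Both branches of the platform check run here, on every host platform.
    env = env_for_platform(platform)
    assert ("PYTHONMALLOC" in env) == (platform == "windows")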
Closing in favor of #1262